Results 1 - 11 of 11
1.
J Healthc Eng; 2023: 4301745, 2023.
Article in English | MEDLINE | ID: covidwho-2259501

ABSTRACT

The infectious coronavirus disease (COVID-19) has become a great threat to global human health. Timely and rapid detection of COVID-19 cases is crucial for controlling its spread through isolation measures as well as for proper treatment. Although the real-time reverse transcription-polymerase chain reaction (RT-PCR) test is a widely used technique for detecting COVID-19 infection, recent research suggests chest computed tomography (CT)-based screening as an effective substitute when RT-PCR is limited by time or availability. Consequently, deep learning-based COVID-19 detection from chest CT images is gaining momentum. Furthermore, visual analysis of data has expanded the opportunities for maximizing prediction performance in this big data and deep learning realm. In this article, we propose two separate deformable deep networks, converted from the conventional convolutional neural network (CNN) and the state-of-the-art ResNet-50, to detect COVID-19 cases from chest CT images. The impact of the deformable concept is assessed through a comparative performance analysis of the deformable and standard models, and the deformable models show better prediction results than their standard counterparts. Furthermore, the proposed deformable ResNet-50 model outperforms the proposed deformable CNN model. The gradient class activation mapping (Grad-CAM) technique is used to visualize and verify the localization of the targeted regions at the final convolutional layer, and this localization is found to be excellent. A total of 2481 chest CT images were used to evaluate the performance of the proposed models with a random train-validation-test split of 80:10:10. The proposed deformable ResNet-50 model achieved a training accuracy of 99.5% and a test accuracy of 97.6%, with a specificity of 98.5% and a sensitivity of 96.5%, which is satisfactory compared with related work. The comprehensive discussion demonstrates that the proposed deformable ResNet-50 model-based COVID-19 detection technique can be useful for clinical applications.
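As an illustration only (the abstract does not specify the implementation framework), the deformable idea can be sketched in PyTorch by replacing a standard 3x3 convolution with torchvision's deformable convolution, whose sampling offsets are predicted by an auxiliary convolution; the paper's actual deformable CNN and deformable ResNet-50 architectures may differ.

```python
# Minimal sketch of a deformable convolution block (assumed PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """A 3x3 deformable convolution whose sampling offsets are predicted
    by an auxiliary regular convolution (a common construction)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 = 18 channels
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)          # learned sampling offsets
        return self.deform_conv(x, offsets)    # deformable sampling of x

# Quick shape check on a dummy CT-slice-sized tensor (batch of 1, 3 channels).
x = torch.randn(1, 3, 224, 224)
print(DeformableBlock(3, 64)(x).shape)  # torch.Size([1, 64, 224, 224])
```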


Subject(s)
COVID-19; Humans; COVID-19/diagnostic imaging; Tomography, X-Ray Computed; Big Data; Motion; Neural Networks, Computer
2.
Sensors (Basel); 22(15), 2022 Aug 08.
Article in English | MEDLINE | ID: covidwho-1994140

ABSTRACT

This work proposes a mono-axial piezoelectric energy harvester based on the innovative combination of magnetic plucking and indirect impacts, e.g., impacts occurring on the package of the harvester. The harvester exploits a permanent magnet placed on a non-magnetic mass that is free to move within a predefined bounded region located in front of a piezoelectric bimorph cantilever equipped with a magnet as the tip mass. When the harvester is subjected to a low-frequency external acceleration, the moving mass induces an abrupt deflection and release of the cantilever by means of magnetic coupling, followed by impacts of the same mass against the harvester package. The combined effect of magnetic plucking and indirect impacts induces a frequency up-conversion. A prototype has been designed, fabricated, fastened to the wrist of a person by means of a wristband, and experimentally tested for different motion levels. By setting the magnets in a repulsive configuration, after 50 s of consecutive impacts induced by shaking, an energy of 253.41 µJ was stored: this value is seven times higher than in the case of the harvester subjected to indirect impacts only, i.e., without magnetic coupling. This confirms that the combination of magnetic plucking and indirect impacts enables effective scavenging of electrical energy even from low-frequency, non-periodic mechanical movements, such as human motion, while preserving the reliability of the piezoelectric components.
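A quick sanity check of the reported figures (my own arithmetic, not taken from the paper): 253.41 µJ accumulated over 50 s corresponds to roughly 5 µW of average harvested power, and the impacts-only baseline implied by the seven-fold improvement is about 36 µJ.

```python
# Back-of-the-envelope check of the reported numbers (not from the paper itself).
energy_uj = 253.41               # stored energy with magnetic plucking + impacts (µJ)
duration_s = 50.0                # duration of consecutive impacts (s)
avg_power_uw = energy_uj / duration_s
impacts_only_uj = energy_uj / 7.0   # "seven times higher" than impacts alone
print(f"average power ≈ {avg_power_uw:.2f} µW, impacts-only energy ≈ {impacts_only_uj:.1f} µJ")
```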


Subject(s)
Electricity; Vibration; Humans; Motion; Reproducibility of Results
3.
Sensors (Basel); 22(13), 2022 Jun 30.
Article in English | MEDLINE | ID: covidwho-1934199

ABSTRACT

Wheelchair users must use proper technique when performing sitting-pivot transfers (SPTs) to prevent upper extremity pain and discomfort. Current methods to analyze the quality of SPTs include the TransKinect, which combines machine learning (ML) models with the Transfer Assessment Instrument (TAI) to automatically score the quality of a transfer using the Microsoft Kinect V2. With the discontinuation of the V2, it is necessary to determine the compatibility of other commercial sensors. The Intel RealSense D435 and the Microsoft Kinect Azure were compared against the V2 for inter- and intra-sensor reliability. A secondary analysis with the Azure was also performed to analyze its performance with the existing ML models used to predict transfer quality. The intra- and inter-sensor reliability was higher for the Azure and V2 (n = 7; ICC = 0.63 to 0.92) than for the RealSense and V2 (n = 30; ICC = 0.13 to 0.7) across four key features. Additionally, the V2 and the Azure showed high agreement with each other on the ML outcomes but not against a ground truth. Therefore, the ML models may need to be retrained, ideally with the Azure, as it was found to be a more reliable and robust sensor than the V2 for tracking wheelchair transfers.
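As a hedged sketch of how such inter-sensor reliability might be computed (the data layout and tooling below are assumptions, not the authors' pipeline), an intraclass correlation coefficient can be obtained from paired per-transfer measurements with the pingouin package:

```python
# Illustrative ICC computation for one kinematic feature measured on the same
# transfers by two depth cameras (values are made up for the example).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "transfer": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "sensor":   ["V2", "Azure"] * 6,
    "feature":  [0.82, 0.80, 0.64, 0.69, 0.91, 0.88, 0.55, 0.60, 0.73, 0.75, 0.67, 0.63],
})

icc = pg.intraclass_corr(data=df, targets="transfer", raters="sensor", ratings="feature")
print(icc[["Type", "ICC", "CI95%"]])
```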


Subject(s)
Wheelchairs; Arm; Biomechanical Phenomena; Motion; Reproducibility of Results
4.
Sensors (Basel); 21(24), 2021 Dec 14.
Article in English | MEDLINE | ID: covidwho-1598728

ABSTRACT

Camera-based remote photoplethysmography (rPPG) is a low-cost and casual non-contact heart rate measurement method suitable for telemedicine. Several factors affect the accuracy of measuring the heart rate and heart rate variability (HRV) using rPPG despite HRV being an important indicator for healthcare monitoring. This study aimed to investigate the appropriate setup for precise HRV measurements using rPPG while considering the effects of possible factors including illumination, direction of the light, frame rate of the camera, and body motion. In the lighting conditions experiment, the smallest mean absolute R-R interval (RRI) error was obtained when light greater than 500 lux was cast from the front (among the following conditions-illuminance: 100, 300, 500, and 700 lux; directions: front, top, and front and top). In addition, the RRI and HRV were measured with sufficient accuracy at frame rates above 30 fps. The accuracy of the HRV measurement was greatly reduced when the body motion was not constrained; thus, it is necessary to limit the body motion, especially the head motion, in an actual telemedicine situation. The results of this study can act as guidelines for setting up the shooting environment and camera settings for rPPG use in telemedicine.
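For illustration only (this is not the authors' rPPG pipeline, and the peak-detection parameters are assumptions), R-R intervals can be extracted from a pulse waveform by peak detection and compared against a reference signal to obtain the mean absolute RRI error reported in the abstract:

```python
# Sketch: RRI extraction from a pulse signal and mean absolute RRI error.
import numpy as np
from scipy.signal import find_peaks

def rr_intervals(signal: np.ndarray, fs: float) -> np.ndarray:
    """Peak-to-peak intervals (seconds) of a pulse signal sampled at fs Hz."""
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))  # ~0.4 s refractory period
    return np.diff(peaks) / fs

def mean_abs_rri_error(rri_rppg: np.ndarray, rri_ref: np.ndarray) -> float:
    n = min(len(rri_rppg), len(rri_ref))
    return float(np.mean(np.abs(rri_rppg[:n] - rri_ref[:n])))

# Toy example at 30 fps (the frame rate found sufficient in the study).
fs = 30.0
t = np.arange(0, 30, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm synthetic pulse
print(rr_intervals(pulse, fs)[:5])
```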


Subject(s)
Photoplethysmography; Telemedicine; Algorithms; Heart Rate; Motion
5.
Sensors (Basel); 21(24), 2021 Dec 15.
Article in English | MEDLINE | ID: covidwho-1592135

ABSTRACT

Regular physical exercise is essential for overall health; however, it is also crucial to mitigate the risk of injuries due to incorrect exercise execution. Existing health and fitness applications often neglect accurate full-body motion recognition and focus on a single body part. Furthermore, they often detect only specific errors or provide feedback only after the execution has finished. This gap raises the need for automated detection of full-body execution errors in real time to assist users in correcting their motor skills. To address this challenge, we propose a method for movement assessment using a full-body haptic motion capture suit. We train probabilistic movement models using the data of 10 inertial sensors to detect exercise execution errors. Additionally, we provide immediate haptic feedback, employing transcutaneous electrical nerve stimulation as soon as an error occurs, to correct the movements. The results, based on a dataset collected from 15 subjects, show that our approach can detect severe movement execution errors directly during the workout and provide haptic feedback at the respective body locations. These results suggest that a haptic full-body motion capture suit, such as the Teslasuit, is promising for movement assessment and can give users appropriate haptic feedback so that they can improve their movements.
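One simple way to realize probabilistic error detection of this kind, shown purely as a sketch (the paper's actual movement models and Teslasuit feedback pipeline are not described in the abstract), is to model "correct" repetitions with a multivariate Gaussian over per-repetition sensor features and flag large Mahalanobis deviations:

```python
# Sketch: flag execution errors as large deviations from a Gaussian model of
# correct repetitions, built from (synthetic) features of 10 inertial sensors.
import numpy as np

rng = np.random.default_rng(0)
correct_feats = rng.normal(size=(200, 10))      # features from 10 inertial sensors

mu = correct_feats.mean(axis=0)
cov = np.cov(correct_feats, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(10))  # regularized inverse covariance

def is_error(feat: np.ndarray, threshold: float = 4.0) -> bool:
    """Flag a repetition whose Mahalanobis distance from the 'correct' model is large."""
    d = np.sqrt((feat - mu) @ cov_inv @ (feat - mu))
    return d > threshold

print(is_error(mu))           # False: matches the learned model
print(is_error(mu + 3.0))     # True: strongly deviating execution
```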


Subject(s)
Exercise; Movement; Feedback; Humans; Motion; Motor Skills
6.
Sensors (Basel); 21(8), 2021 Apr 16.
Article in English | MEDLINE | ID: covidwho-1308430

ABSTRACT

Adopting effective techniques to automatically detect and identify small drones is a compelling need for a number of different stakeholders in both the public and private sectors. This work presents three original approaches that competed in a grand challenge on the "Drone vs. Bird" detection problem. The goal is to detect one or more drones appearing at some point in video sequences where birds and other distractor objects may also be present, together with motion in the background or foreground. Algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds nor being confused by the rest of the scene. The three approaches, based on different deep learning strategies, are proposed and compared on a real-world dataset provided by a consortium of universities and research centers under the 2020 edition of the Drone vs. Bird Detection Challenge. Results show that the test sequences vary in difficulty depending on the size and shape visibility of the drone, with sequences recorded by a moving camera and containing very distant drones being the most challenging. The performance comparison reveals that the different approaches are somewhat complementary in terms of correct detection rate, false alarm rate, and average precision.
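The alarm logic described in the challenge can be illustrated independently of any particular detector (the sketch below is an assumption-laden example, not one of the three competing approaches): given per-frame detections, an alarm with a position estimate is raised only for confident "drone" detections, while birds and other distractors are ignored.

```python
# Sketch: post-filter generic detector output so that alarms carry a position
# estimate and are raised only for confident drone detections.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) in pixels

def drone_alarms(detections, conf_threshold: float = 0.5):
    alarms = []
    for det in detections:
        if det.label == "drone" and det.confidence >= conf_threshold:
            x, y, w, h = det.box
            alarms.append((x + w / 2, y + h / 2))  # box centre as position estimate
    return alarms

frame = [Detection("bird", 0.9, (10, 20, 15, 15)),
         Detection("drone", 0.8, (300, 120, 40, 25))]
print(drone_alarms(frame))  # [(320.0, 132.5)]
```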


Subject(s)
Deep Learning; Algorithms; Animals; Birds; Motion
7.
Sensors (Basel); 21(12), 2021 Jun 10.
Article in English | MEDLINE | ID: covidwho-1282569

ABSTRACT

Interest in autonomous vehicles (AVs) has increased significantly in recent years, but despite the huge research effort carried out in the field of intelligent transportation systems (ITSs), several technological challenges must still be addressed before AVs can be extensively deployed in any environment. In this context, one of the key technological enablers is the motion-planning and control system, whose aim is to guarantee the occupants' comfort and safety. In this paper, a trajectory-planning and control algorithm is developed based on a Model Predictive Control (MPC) approach that is able to work in different road scenarios (such as urban areas and motorways). The MPC is designed using imitation learning from a dataset of real-world overtaking maneuvers, with the aim of obtaining human-like behavior. The algorithm is used to generate optimal trajectories and control the vehicle dynamics. Simulations and Hardware-in-the-Loop tests are carried out to demonstrate the effectiveness and computational efficiency of the proposed approach.
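A minimal sketch of a linear MPC of this general kind (assumed double-integrator lateral model and cvxpy formulation; not the paper's actual vehicle model, cost, or constraints) tracks a reference lateral position over a finite horizon and applies only the first control action:

```python
# Sketch: small tracking MPC solved as a quadratic program with cvxpy.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                      # sample time (s), prediction horizon
A = np.array([[1, dt], [0, 1]])      # state: [lateral position, lateral velocity]
B = np.array([[0.5 * dt**2], [dt]])  # input: lateral acceleration

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([0.0, 0.0])
y_ref = 3.5                          # e.g. target lateral offset for an overtake (m)

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.square(x[0, k + 1] - y_ref) + 0.1 * cp.square(u[0, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[0, k]) <= 2.0]      # comfort/actuator limit (m/s^2)

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[0, 0])   # first control action to apply (receding horizon)
```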


Subject(s)
Accidents, Traffic; Imitative Behavior; Algorithms; Humans; Motion; Technology
8.
Pituitary; 24(4): 499-506, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1064567

ABSTRACT

PURPOSE: To determine the particle size, concentration, airborne duration and spread during endoscopic endonasal pituitary surgery in actual patients in a theatre setting. METHODS: This observational study recruited a convenience sample of three patients. Procedures were performed in a positive pressure operating room. Particle image velocimetry and spectrometry with air sampling were used for aerosol detection. RESULTS: Intubation and extubation generated small particles (< 5 µm) in mean concentrations 12 times greater than background noise (p < 0.001). The mean particle concentrations during endonasal access were 4.5 times greater than background (p = 0.01). Particles were typically large (> 75 µm), remained airborne for up to 10 s and travelled up to 1.1 m. Use of a microdebrider generated mean aerosol concentrations 18 times above baseline (p = 0.005). High-speed drilling did not produce aerosols greater than baseline. Pituitary tumour resection generated mean aerosol concentrations less than background (p = 0.18). Surgical drape removal generated small and large particles in mean concentrations 6.4 times greater than background (p < 0.001). CONCLUSION: Intubation and extubation generate large amounts of small particles that remain suspended in air for long durations and disperse through theatre. Endonasal access and pituitary tumour resection generate smaller concentrations of larger particles which are airborne for shorter periods and travel shorter distances.


Subject(s)
Aerosols/adverse effects; Endoscopy/adverse effects; Pituitary Neoplasms/surgery; Airway Extubation/adverse effects; Humans; Intubation, Intratracheal/adverse effects; Motion; Occupational Exposure/adverse effects; Occupational Health; Operating Rooms; Particle Size; Prospective Studies; Risk Assessment; Risk Factors; Time Factors
9.
J Acoust Soc Am; 148(4): 2096, 2020 Oct.
Article in English | MEDLINE | ID: covidwho-901219

ABSTRACT

Brass wind instruments with long sections of cylindrical pipe, such as trumpets and trombones, sound "brassy" when played at a fortissimo level due to the generation of a shock front in the instrument. It has been suggested that these shock fronts may increase the spread of COVID-19 by propelling respiratory particles containing the SARS-CoV-2 virus several meters, due to particle entrainment in the low-pressure area behind the shocks. To determine the likelihood of this occurring, fluorescent particles ranging in size from 10 to 50 µm were dropped into the shock regions produced by a trombone, a trumpet, and a shock tube. Preliminary results indicate that propagation of small airborne particles by the shock fronts radiating from brass wind instruments is unlikely.


Subject(s)
Betacoronavirus/pathogenicity; Coronavirus Infections/transmission; Inhalation Exposure/prevention & control; Music; Pneumonia, Viral/transmission; Social Isolation; Aerosols; COVID-19; Coronavirus Infections/prevention & control; Coronavirus Infections/virology; Equipment Design; Host-Pathogen Interactions; Humans; Motion; Pandemics/prevention & control; Particle Size; Pneumonia, Viral/prevention & control; Pneumonia, Viral/virology; SARS-CoV-2
10.
Sensors (Basel); 20(12), 2020 Jun 23.
Article in English | MEDLINE | ID: covidwho-610868

ABSTRACT

The role of mobile robots for cleaning and sanitation purposes is increasing worldwide. Disinfection and hygiene are two integral parts of any safe indoor environment, and these factors become even more critical in COVID-19-like pandemic situations. Door handles are highly sensitive contact points that are prone to contamination. Automating the door-handle cleaning task is important not only for ensuring safety, but also for improving efficiency. This work proposes an AI-enabled framework for automating cleaning tasks through a Human Support Robot (HSR). The overall cleaning process involves mobile base motion, door-handle detection, and control of the HSR manipulator to complete the cleaning tasks. The detection part exploits a deep-learning technique to classify the image space and provides a set of coordinates for the robot. The cooperative control between spraying and wiping is developed in the Robot Operating System (ROS). The control module uses the information obtained from the detection module to generate a task/operational space for the robot and to evaluate the desired position for actuating the manipulator. The complete strategy is validated through numerical simulations and experiments on a Toyota HSR platform.
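One step implied by this pipeline, sketched under assumptions (the camera intrinsics and detection values below are placeholders, not from the paper), is back-projecting the detected handle's pixel coordinates and depth into a camera-frame 3D goal via the pinhole model, which a ROS manipulator planner could then target:

```python
# Sketch: convert a detected door-handle pixel plus depth into a camera-frame goal.
import numpy as np

def pixel_to_camera_point(u: float, v: float, depth_m: float,
                          fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a pixel (u, v) with known depth into camera-frame XYZ (metres)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical detection: handle centre at pixel (412, 260), 0.85 m away.
target = pixel_to_camera_point(412, 260, 0.85, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(target)   # camera-frame goal for the spray/wipe end-effector
```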


Subject(s)
Betacoronavirus; Coronavirus Infections/prevention & control; Disinfection/instrumentation; Pandemics/prevention & control; Pneumonia, Viral/prevention & control; Robotics/instrumentation; Algorithms; COVID-19; Coronavirus Infections/transmission; Coronavirus Infections/virology; Deep Learning; Disinfection/methods; Equipment Design; Humans; Maintenance; Motion; Pneumonia, Viral/transmission; Pneumonia, Viral/virology; Robotics/methods; Robotics/statistics & numerical data; SARS-CoV-2